ILP systems
Relational decomposition for program synthesis
Hocquette, Céline, Cropper, Andrew
We introduce a novel approach to program synthesis that decomposes complex functional tasks into simpler relational synthesis sub-tasks. We demonstrate the effectiveness of our approach using an off-the-shelf inductive logic programming (ILP) system on three challenging datasets. Our results show that (i) a relational representation can outperform a functional one, and (ii) an off-the-shelf ILP system with a relational encoding can outperform domain-specific approaches.
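The contrast between the two encodings can be sketched in a toy form. The following is a hedged illustration, not the paper's actual encoding: `droplast` and the two sub-relation names are invented for the example. A functional representation must map each input to its single correct output, while a relational representation only has to check input/output pairs, and so can be decomposed into simpler sub-relations that are each easier to learn.

```python
def droplast_functional(xs):
    """Functional target: map a list directly to itself minus the last element."""
    return xs[:-1]

# Relational view: droplast(A, B) holds iff B is A without its last element.
# It decomposes into two simpler sub-relations (names are illustrative).
def length_minus_one(a, n):          # sub-relation 1: n = len(a) - 1
    return n == len(a) - 1

def prefix_of(a, b, n):              # sub-relation 2: b is the length-n prefix of a
    return b == a[:n]

def droplast_relational(a, b):
    # The composed relation: some n satisfies both sub-relations.
    return any(length_minus_one(a, n) and prefix_of(a, b, n)
               for n in range(len(a) + 1))

assert droplast_functional([1, 2, 3]) == [1, 2]
assert droplast_relational([1, 2, 3], [1, 2])
assert not droplast_relational([1, 2, 3], [1, 3])
```

The relational form is nondeterminism-friendly: each sub-relation can be synthesised independently and then conjoined, which is the decomposition idea the abstract describes.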
A Critical Review of Inductive Logic Programming Techniques for Explainable AI
Zhang, Zheng, Xu, Liangliang, Yilmaz, Levent, Liu, Bo
Despite recent advances in modern machine learning algorithms, the opaqueness of their underlying mechanisms remains an obstacle to adoption. To instill confidence and trust in artificial intelligence systems, Explainable Artificial Intelligence has emerged as a response, aiming to improve the explainability of modern machine learning algorithms. Inductive Logic Programming (ILP), a subfield of symbolic artificial intelligence, plays a promising role in generating interpretable explanations because of its intuitive logic-driven framework. ILP effectively leverages abductive reasoning to generate explainable first-order clausal theories from examples and background knowledge. However, several challenges in developing ILP-inspired methods must be addressed before they can be applied successfully in practice. For example, existing ILP systems often have a vast solution space, and the induced solutions are very sensitive to noise and disturbances. This survey paper summarizes recent advances in ILP and discusses statistical relational learning and neural-symbolic algorithms, which offer synergistic views of ILP. Following this critical review, we delineate the observed challenges and highlight potential avenues of further ILP-motivated research toward developing self-explanatory artificial intelligence systems.
Learning logic programs by discovering where not to search
Cropper, Andrew, Hocquette, Céline
The goal of inductive logic programming (ILP) is to search for a hypothesis that generalises training examples and background knowledge (BK). To improve performance, we introduce an approach that, before searching for a hypothesis, first discovers where not to search. We use given BK to discover constraints on hypotheses, such as that a number cannot be both even and odd. We use the constraints to bootstrap a constraint-driven ILP system. Our experiments on multiple domains (including program synthesis and game playing) show that our approach can (i) substantially reduce learning times by up to 97%, and (ii) scale to domains with millions of facts.
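The even/odd example in the abstract can be sketched concretely. The following is a minimal illustration, not the actual constraint-discovery algorithm: predicate names and the pruning scheme are assumptions. Two monadic predicates whose extensions in the BK are disjoint can never both hold of the same constant, so any candidate rule body requiring both can be discarded before the hypothesis search begins.

```python
bk = {  # background facts: predicate name -> set of constants it holds for
    "even":  {0, 2, 4, 6, 8},
    "odd":   {1, 3, 5, 7, 9},
    "small": {0, 1, 2, 3},
}

def disjoint_pairs(facts):
    """Find unordered predicate pairs with empty intersection in the BK."""
    preds = sorted(facts)
    return {(p, q) for i, p in enumerate(preds) for q in preds[i + 1:]
            if not facts[p] & facts[q]}

def prune(candidates, constraints):
    """Drop candidate rule bodies that use a mutually exclusive pair."""
    return [body for body in candidates
            if not any({p, q} <= set(body) for p, q in constraints)]

constraints = disjoint_pairs(bk)          # discovers that even/odd are exclusive
candidates = [("even", "small"), ("even", "odd"), ("odd", "small")]
print(prune(candidates, constraints))     # keeps only the satisfiable bodies
```

In this toy version the constraints are mined once from the BK and then applied as a filter, mirroring the bootstrap step the abstract describes: the cost of discovery is paid up front so the main search never visits provably unsatisfiable hypotheses.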
Constraint-driven multi-task learning
Cretu, Bogdan, Cropper, Andrew
Inductive logic programming is a form of machine learning based on mathematical logic that generates logic programs from given examples and background knowledge. In this project, we extend the Popper ILP system to make use of multi-task learning. We implement the state-of-the-art approach and several new strategies to improve search performance. Furthermore, we introduce constraint preservation, a technique that improves overall performance for all approaches. Constraint preservation allows the system to transfer knowledge between updates to the background knowledge set, reducing the amount of repeated work performed by the system. It also allows us to move from the current state-of-the-art iterative-deepening search to a more efficient breadth-first search. Finally, we experiment with curriculum learning techniques and show their potential benefit to the field.
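The idea of constraint preservation can be sketched with a toy learner. This is an illustration, not the project's actual code: the candidate hypotheses, coverage tests, and the assumption that a failure is task-independent (which is what preservation relies on) are all invented for the example. Candidates that fail on one task are stored and skipped on later tasks instead of being re-tested, which is the reduction in repeated work the abstract claims.

```python
import math

CANDIDATES = {                  # toy hypothesis space: name -> coverage test
    "square":   lambda x: math.isqrt(x) ** 2 == x,
    "even":     lambda x: x % 2 == 0,
    "positive": lambda x: x > 0,
}

failed = set()                  # constraint store preserved between tasks
evaluations = 0                 # bookkeeping: how many coverage checks ran

def learn(examples):
    """Return the first candidate covering all examples, skipping known failures."""
    global evaluations
    for name, covers in CANDIDATES.items():
        if name in failed:
            continue            # preserved constraint: skip without re-testing
        evaluations += 1
        if all(covers(e) for e in examples):
            return name
        failed.add(name)        # remember the failure for later tasks

print(learn([2, 6]))   # "square" fails and is stored; "even" is returned
print(learn([4, 8]))   # "square" is skipped this time; "even" found directly
```

After both tasks only three coverage checks have run rather than four; with a large hypothesis space and many tasks, this saving compounds.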
Inductive Logic Programming At 30: A New Introduction
Cropper, Andrew (University of Oxford) | Dumančić, Sebastijan (TU Delft)
Inductive logic programming (ILP) is a form of machine learning. The goal of ILP is to induce a hypothesis (a set of logical rules) that generalises training examples. As ILP turns 30, we provide a new introduction to the field. We introduce the necessary logical notation and the main learning settings; describe the building blocks of an ILP system; compare several systems on several dimensions; describe four systems (Aleph, TILDE, ASPAL, and Metagol); highlight key application areas; and, finally, summarise current limitations and directions for future research.
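The core check an ILP system performs can be sketched as follows. This is a hedged toy version, not any particular system's implementation: a hypothesis H (here a single Datalog-style rule) is a solution if BK plus H entails every positive example and no negative one, with entailment computed by naive forward chaining over ground facts. The family relation and all names are illustrative.

```python
BK = {("parent", "ann", "bob"), ("parent", "bob", "carol")}

def forward_chain(facts):
    """Apply the candidate rule grandparent(X,Z) :- parent(X,Y), parent(Y,Z)."""
    derived = set(facts)
    for (_, x, y1) in facts:
        for (_, y2, z) in facts:
            if y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

model = forward_chain(BK)
pos = [("grandparent", "ann", "carol")]   # positive examples: must be entailed
neg = [("grandparent", "bob", "ann")]     # negative examples: must not be
assert all(e in model for e in pos)
assert not any(e in model for e in neg)
```

An ILP system's job is to find such a rule automatically by searching the hypothesis space; here the rule is fixed so that only the coverage test is shown.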
Preprocessing in Inductive Logic Programming
Inductive logic programming is a type of machine learning in which logic programs are learned from examples. This learning typically occurs relative to some background knowledge provided as a logic program. This dissertation introduces bottom preprocessing, a method for generating initial constraints on the programs an ILP system must consider. Bottom preprocessing applies ideas from inverse entailment to modern ILP systems. Inverse entailment is an influential early ILP approach introduced with Progol. This dissertation also presents $\bot$-Popper, an implementation of bottom preprocessing for the modern ILP system Popper. It is shown experimentally that bottom preprocessing can reduce learning times of ILP systems on hard problems. This reduction can be especially significant when the amount of background knowledge in the problem is large.
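A much-simplified sketch of the intuition behind a bottom clause follows. This is illustrative only: Progol's actual construction uses mode declarations and variable chaining, whereas here we merely collect the BK literals that mention constants reachable from one positive example, so that only generalisations of that literal set need to be searched.

```python
BK = [
    ("parent", "ann", "bob"),
    ("parent", "bob", "carol"),
    ("parent", "dave", "eve"),   # irrelevant to the chosen example
]

def bottom_literals(example_consts, bk, depth=2):
    """Grow the set of relevant constants, keeping the BK literals it touches."""
    consts, chosen = set(example_consts), []
    for _ in range(depth):
        for lit in bk:
            if consts & set(lit[1:]) and lit not in chosen:
                chosen.append(lit)
                consts |= set(lit[1:])
    return chosen

# For the example grandparent(ann, carol), only the ann/bob/carol facts survive.
print(bottom_literals({"ann", "carol"}, BK))
```

The effect mirrors the dissertation's claim about large background knowledge: the more BK facts are irrelevant to a given example, the more of the search space the precomputed bound rules out.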
Predicate Invention by Learning From Failures
Discovering novel high-level concepts is one of the most important steps needed for human-level AI. In inductive logic programming (ILP), discovering novel high-level concepts is known as predicate invention (PI). Although seen as crucial since the founding of ILP, PI is notoriously difficult and most ILP systems do not support it. In this paper, we introduce POPPI, an ILP system that formulates the PI problem as an answer set programming problem. Our experiments show that (i) PI can drastically improve learning performance when useful, (ii) PI is not too costly when unnecessary, and (iii) POPPI can substantially outperform existing ILP systems.
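Why an invented predicate can drastically help is easy to see in a toy setting. The following is a hedged illustration, not POPPI's ASP encoding: the relations and the name `inv1` are invented for the example. Under a limit of two body literals per clause, `greatgrandparent` is only expressible if the learner invents an intermediate predicate (here `inv1`, which happens to coincide with grandparent).

```python
parent = {("ann", "bob"), ("bob", "carol"), ("carol", "dan")}

# Invented predicate: inv1(X,Z) :- parent(X,Y), parent(Y,Z).
inv1 = {(x, z) for (x, y1) in parent for (y2, z) in parent if y1 == y2}

# The target now fits the size limit:
# greatgrandparent(X,Z) :- inv1(X,Y), parent(Y,Z).
greatgrandparent = {(x, z) for (x, y1) in inv1 for (y2, z) in parent if y1 == y2}

assert ("ann", "dan") in greatgrandparent
assert ("bob", "dan") not in greatgrandparent
```

Without the invented predicate, no program within the two-literal clause limit defines this relation, which is the sense in which PI enlarges what is learnable at a fixed hypothesis size.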
Inductive logic programming at 30
Cropper, Andrew, Dumančić, Sebastijan, Evans, Richard, Muggleton, Stephen H.
Inductive logic programming (ILP) is a form of logic-based machine learning. The goal of ILP is to induce a hypothesis (a logic program) that generalises given training examples and background knowledge. As ILP turns 30, we survey recent work in the field. In this survey, we focus on (i) new meta-level search methods, (ii) techniques for learning recursive programs that generalise from few examples, (iii) new approaches for predicate invention, and (iv) the use of different technologies, notably answer set programming and neural networks. We conclude by discussing some of the current limitations of ILP and directions for future research.
Inductive logic programming at 30: a new introduction
Cropper, Andrew, Dumančić, Sebastijan
Inductive logic programming (ILP) is a form of machine learning. The goal of ILP is to induce a hypothesis (a set of logical rules) that generalises given training examples. In contrast to most forms of machine learning, ILP can learn human-readable hypotheses from small amounts of data. As ILP turns 30, we provide a new introduction to the field. We introduce the necessary logical notation and the main ILP learning settings. We describe the main building blocks of an ILP system. We compare several ILP systems on several dimensions. We describe in detail four systems (Aleph, TILDE, ASPAL, and Metagol).
Inducing game rules from varying quality game play
General Game Playing (GGP) is a framework in which an artificial intelligence program is required to play a variety of games successfully. It acts as a test bed for AI and a motivator of research. The AI is given a random game description at runtime, which it then plays. The framework includes repositories of game rules. The Inductive General Game Playing (IGGP) problem challenges machine learning systems to learn these GGP game rules by watching the game being played. In other words, IGGP is the problem of inducing general game rules from specific game observations. Inductive Logic Programming (ILP) has been shown to be a promising approach to this problem, though it remains a hard problem for ILP systems. Existing work on IGGP has always assumed that the game player being observed makes random moves. This is not representative of how a human learns to play a game. With random gameplay, situations that would normally be encountered when humans play are not present. To address this limitation, we analyse the effect of using intelligent versus random gameplay traces, as well as the effect of varying the number of traces in the training set. We use Sancho, the 2014 GGP competition winner, to generate intelligent game traces for a large number of games. We then use the ILP systems Metagol, Aleph, and ILASP to induce game rules from the traces. We train and test the systems on combinations of intelligent and random data, including a mixture of both. We also vary the volume of training data. Our results show that, whilst some games were learned more effectively in some experiments than others, no overall trend was statistically significant. The implications of this work are that varying the quality of training data as described in this paper can strongly affect the accuracy of the rules learned for individual games; however, no single configuration works best for all games.
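The IGGP setting described above can be reduced to a toy sketch. This is a hedged illustration, not any of the cited systems: the counter game, its traces, and the candidate rules are all invented. Given observed (state, next-state) transitions, the learner keeps only the candidate update rules consistent with every observation, which is rule induction from gameplay in miniature.

```python
traces = [(0, 1), (1, 2), (2, 3), (3, 0)]   # observed transitions of a mod-4 counter

candidates = {                               # hypothesised update rules
    "increment":      lambda s: s + 1,
    "increment_mod4": lambda s: (s + 1) % 4,
    "double_mod4":    lambda s: (s * 2) % 4,
}

consistent = [name for name, rule in candidates.items()
              if all(rule(s) == t for s, t in traces)]
print(consistent)   # only the mod-4 increment explains every transition
```

The paper's question then becomes visible: if traces come from biased (intelligent) play, some transitions are never observed, and several inconsistent rules may wrongly survive the filter.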